The MOOC as Distributed Intelligence: Dimensions of a Framework & Evaluation of MOOCs

Authors

  • Shuchi Grover
  • Paul Franz
  • Emily Schneider
  • Roy Pea
Abstract

Massive Open Online Courses (MOOCs) have been at the center of media attention and hyperbole since 2012. As MOOCs proliferate and continue to influence higher education, it is increasingly pressing to define frameworks for their design and evaluation. This paper proposes such a framework, grounded in CSCL and learning sciences research, along with a discussion of the unique aspects of MOOCs as learning environments. Through the lens of distributed intelligence, the framework defines distinct, but interconnected, dimensions of MOOCs that must work synergistically to maximize individual as well as collective learning.

Introduction and Motivation

In all the hyperbole surrounding the rollout of Massive Open Online Courses (MOOCs) over the past year and a half, much has been said and written about the “campus tsunami” (Brooks, 2012) that is purportedly poised to change the face of higher education. Interestingly, while much of the positive feedback has focused on the noble sentiments behind making world-class courses (mostly from elite universities) freely available to anyone, anywhere in the world, a fair amount of the negative press aimed specifically at instructionist MOOCs or xMOOCs (as characterized by Daniel, 2012) has revolved around the quality of the courses themselves. Though this criticism covers the gamut of instructional design issues, (mostly misplaced views of) dropout and completion rates have garnered the most attention. We believe that both the praise and the criticism of MOOCs are often founded on historical assumptions about learning environments and outcomes that do not necessarily apply (at least without some reconsideration and reframing) to this new phenomenon. MOOCs today are a moving target: their form and function are shifting weekly, as course designers and platform providers around the world dream up new approaches to open online learning.
To remain grounded in this shifting landscape, we need a flexible and generalizable framework for understanding the effects of MOOC design decisions on learning. As a start, we reframe the question What makes a good MOOC? to How can we make a MOOC work for as many of its diverse participants as possible? MOOCs attract a global set of learners with an extensive range of goals and prior knowledge. These individuals vary in the approaches they take to learning, their responses to the social and pedagogical context for learning, and their intrapersonal strategies for dealing with challenges. Framing design and evaluation in this way emphasizes the potential for optimization for different participants or groups of participants—and the possibility of defining different learning outcomes for these different groups of learners. Learning outcomes should also be defined expansively, based on the goals that course designers have to influence cognitive and affective competencies of any subset of learners, or learning on the level of the collective. Furthermore, it helps to view a MOOC as a designed object (Simon, 1969) whose creation should ideally be influenced not only by faculty and instructional designers, but also by technologists, data scientists and learning researchers. These stakeholders influence different elements of the MOOC that interrelate to create learning opportunities for participants. A framework for the design and evaluation of MOOCs must reflect the complex nature of these interrelationships. It must also encapsulate principles from the learning sciences to guide the creation of a robust set of criteria for the design and evaluation of MOOC learning experiences. 
These criteria will not only help meaningfully frame the discourse on MOOC quality, but also serve prospective learners, course designers and faculty, researchers, as well as the technologists who are charged with developing and evolving the platforms on which MOOCs are deployed to meet needs and enable innovative experimentation.

Theoretical and Conceptual Framework

Our proposed framework includes a focus on the elements that make MOOCs distinct from previous forms of virtual learning environments (e.g., Barab, Kling & Gray, 2004; Dillenbourg, Schneider & Synteta, 2002; Pea, 1998, 2004; Weiss, Nolan & Trifonas, 2006). We take as given the principles of learning and instructional design established by decades of work on socio-constructivist learning in CSCL and distance learning/e-learning (e.g., Harasim, 2011; Sawyer, 2005; Stahl, 2004, 2006), though identifying the best strategies for implementing these principles in the platform features and instructional strategies of MOOCs is a ripe area for future work. There is no doubt that it is the “M” in MOOCs that underlies and influences the unique nature of the design space. Enabled by being open and online, the massive population of students with varied goals, enrolled from all corners of the world, requires a reconsideration of instruction and assessment strategies, as well as of the possible forms of social interaction. This atypicality also underscores the need for fresh perspectives on design and evaluation that are suited to these learning environments. Moreover, the volume and nature of the data gathered for learning analytics is on a scale and granularity that could shape learning experiences in hitherto unimaginable ways. We argue that it is the distributed nature of intelligence (Pea, 1993) and the associated learning experiences that is heightened most in MOOCs.
Pea’s argument that the resources that shape and enable learning and learner activity “are distributed in configuration across people, environments, and situations” is actualized, even amplified, in MOOC settings, where the designed learning environment embodies the pedagogical assumptions of the technologists and instructors. Additionally, in keeping with Pea’s distributed intelligence framework, MOOCs exemplify both the social and the material dimensions of distributed intelligence. Many of the traditional roles and responsibilities of the teaching team are distributed among learners because of the scale of the MOOC. For example, learners push each other’s understanding through participation in the discussion forum, and assess one another’s work in instances where human feedback is preferable to automatic grading. Learning in a MOOC is also shaped in unconventional ways by the artifacts and affordances of new technology tools, such as those that support educational data mining, crowd-sourcing, and social and cognitive presence (Garrison, 2007) in the learning environment.

Dimensions of a MOOC Design and Evaluation Framework

As suggested in Figure 1, our framework for design and evaluation envisions a MOOC as four distinct dimensions across which intelligence is distributed. While the interactive learning environment is at the core of the learning experience, learning by individual participants and by the group as a whole results from a synergistic interplay between each of these dimensions. Faculty, instructional designers, technologists, data scientists, and learning scientists together bring the expertise to shape the environment for optimal learning. Our framework also serves to guide these conversations. The Interactive Learning Environment (ILE) is made up of the core course elements: Content, Instruction (or Pedagogy), Assessment, and Community.
These elements are initially shaped by the course creators as well as by the technical affordances of the course platform. These design choices reflect the assumptions of designers about the ways in which people learn, and should be pushed to reflect the state of the art of knowledge in the learning sciences. For example, many current MOOCs rely on a lecture-style “talking head” delivery mode of content, which presupposes a transmission-and-acquisition model of education (Rogoff, 1990), rather than supporting alternate learning approaches where learners might instead be tasked with generating their own knowledge, and participating in extra-MOOC, offline learning and reflection experiences as part of their required activities. As the course goes on, learners choose how they interact with these elements of the ILE to fashion an experience to suit their needs. This is manifested most powerfully in the context of the Community element, as learners control their relationship with other learners and the MOOC instructor through their interactions with them, both face-to-face in meet-ups and online using beyond-MOOC groupware such as Google+. These choices about interaction and assessment are also driven by the learner’s background and intentions, as described below, or in some cases, by other faculty who may be using the MOOC in a blended or flipped mode in their classrooms. The MOOC designer is charged with enabling and enhancing these experiences in a way that best serves the unique needs of individual learners. For example, a MOOC tailored to a college student taking the course for credit, whether in a formal setting or in an individual capacity, would involve forms of formative and summative assessment, requirements for participation, and course expectations that would not suit the needs of a casual middle-aged learner taking the course out of curiosity about the subject.
Learner background and intention captures the variety of learner purposes for course engagement, a byproduct of the open-access nature of the courses and the novelty of the medium. Based on surveys we have conducted in some MOOCs, in addition to traditional students taking the course for some form of credit, a large percentage of others are enrolled with purposes as assorted as “curiosity about the topic”, “to sharpen my job skills”, and “fun and challenge.” This pattern implies a need to serve up different courses suited to the varied purposes of MOOC learners: a customized learning approach that could be enabled by analytics on behavioral data from learners, as well as by self-reported intentions for MOOC enrollment. It also means that traditional measures of learning outcomes, like course completion, may not accurately reflect MOOC student engagement. The technology infrastructure, comprising the MOOC platform used in conjunction with social media and other technology tools for augmenting communication and interaction, powers the MOOC as a whole, including its learning analytics engine, and serves to cater to diverse learner needs, ranging from geography and language to how MOOC content is accessed and interacted with (e.g., downloading vs. streaming video). Crucial design decisions include how to leverage technology affordances to achieve the learning objectives of MOOC participants, and how data about learners and learner interactions are collected and analyzed to support (even real-time) improvement of both the underlying platform technology and the learning environment. Finally, evidence-based improvement is a meta-MOOC process undergirding design decisions around the ILE and technology infrastructure. It is powered by data mining and analytics designed to measure the desired course learning outcomes, and incorporates qualitative evidence from sources like forums and surveys.
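One way to make this concrete is to summarize engagement separately for each self-reported intention group, so that "completion" is judged relative to what each group set out to do. The sketch below is illustrative only: the survey records, field names, and thresholds are hypothetical, not data or an API from the paper.

```python
from collections import defaultdict

# Hypothetical joined survey + clickstream records:
# (learner_id, stated_intention, videos_watched, completed_course)
records = [
    ("u1", "credit",            42, True),
    ("u2", "curiosity",          7, False),
    ("u3", "job skills",        18, False),
    ("u4", "curiosity",          3, False),
    ("u5", "credit",            40, True),
    ("u6", "fun and challenge", 12, False),
]

def engagement_by_intention(records):
    """Group learners by self-reported enrollment intention and summarize
    engagement per group, rather than reporting one course-wide completion
    rate that conflates credit-seekers with casual browsers."""
    groups = defaultdict(list)
    for _, intention, videos, completed in records:
        groups[intention].append((videos, completed))
    summary = {}
    for intention, rows in groups.items():
        n = len(rows)
        summary[intention] = {
            "n": n,
            "avg_videos": sum(v for v, _ in rows) / n,
            "completion_rate": sum(1 for _, c in rows if c) / n,
        }
    return summary

summary = engagement_by_intention(records)
```

On data like this, the "credit" group completes at a high rate while "curiosity" learners watch a handful of videos and never finish, which is arguably success on their own terms, not attrition.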
Evidence-based improvement is important to any learning environment, but it deserves particular attention in MOOC design and evaluation as it provides an opportunity for leveraging the distributed intelligence of the many MOOC stakeholders to create a virtuous iteration cycle leading to improved learning. Leveraging the affordances and intelligence in each of these dimensions will result in a MOOC that strives to work for as many of its diverse learners as possible. In such a MOOC, each dimension interacts meaningfully with every other dimension. Within the limits of this short paper, we highlight one example of the interplay between dimensions, indicating how it influences design and evaluation decisions. An instructor offering a mathematics MOOC may want to use the creation and evaluation of proofs as a key assessment piece. However, because there is no reliable way to machine grade proofs, the instructor decides to leverage the distributed intelligence and learning of the students in the course in a peer assessment system. The technology platform supports this process through a peer assessment module. After a first round of peer assessments, the instructor realizes through analytics and forum posts that students need more support grading one another’s work, and so records a new video modeling the grading process, and incorporates grading exercises into the weekly assessments. In this example the instructor leverages all parts of the learning environment: community grading of peers’ content knowledge, as well as assessment and instruction as a scaffold for peer grading. The instructor leverages the technology infrastructure to make changes, and responds to student backgrounds by providing multiple avenues for learning to peer grade. Finally, the instructor makes a meaningful improvement based on data from analytics and forums. 
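In the peer-assessment scenario above, one rough analytics signal that students need more support with grading is strong disagreement among the peer graders of the same submission. The sketch below illustrates this idea with hypothetical submission IDs and scores and an arbitrary disagreement threshold; it is one possible heuristic, not the method described in the paper.

```python
import statistics

# Hypothetical peer-grading data: submission_id -> peer scores on a 0-10 rubric
peer_scores = {
    "proof_17": [8, 9, 8],
    "proof_42": [2, 9, 5],   # graders disagree sharply
    "proof_58": [6, 6, 7],
}

def flag_disputed(peer_scores, max_stdev=1.5):
    """Return submissions whose peer grades disagree strongly.
    A high sample standard deviation among graders is a crude signal
    that students are applying the rubric inconsistently and may need
    more scaffolding (e.g., a video modeling the grading process)."""
    return sorted(
        sid for sid, scores in peer_scores.items()
        if len(scores) > 1 and statistics.stdev(scores) > max_stdev
    )

disputed = flag_disputed(peer_scores)
```

If many submissions are flagged in the first grading round, that is exactly the kind of evidence that could prompt the instructor's mid-course intervention described above.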
Note that instructor responsiveness is itself a design decision, and the improvements made in situ in this example could have been designed to be driven by analytics and peer assessment algorithms and put in place prior to the start of the course. Thus we see that this framework essentially means pushing downward in the pyramid, from faculty to students, the responsibilities for collective learning (Pea, 1994) of the MOOC participants as a whole, and designing for innovative and hitherto untested technology infrastructure elements such as scaffolding social learning group formation, peer assessment, question-clustering techniques, polling/voting-up mechanisms, or karma points (Lewin, 2012) for incentivizing initiative in supporting others’ learning in the MOOC. Technologists are already actively working to augment MOOC platforms with a plethora of products such as tools to support contextual in-text and in-video discussions, formation of study groups and project teams, discussion boards with voting and other features, and ways MOOC learners can connect not only in real time, but also in the real world. Through conscious course design and A/B studies (1) comparing different versions of its implementation, a MOOC could potentially be engineered to maximize the learning of the collective. Further along in the development of MOOCs, this collective maximization may work in tandem with “mass customization” (Salvador, De Holan & Piller, 2009), wherein the course is restructured at the individual level in order to best support each student, and not only the aggregate. Regardless of how MOOCs are optimized in the near term or projected future, deliberate thought given to the dimensions of MOOCs where data should be collected, and to the appropriate techniques for transforming that data into meaningful indicators of learning, engagement, and distributed intelligence, will serve to provide the evidence needed to warrant changes in course design that lead to measurable improvements.
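An A/B comparison of two course variants can be analyzed with a standard two-proportion z-test on some binary outcome, such as the share of learners in each variant who reach a chosen engagement milestone. The counts, variant labels, and milestone below are hypothetical illustrations, and the test shown is a generic statistical procedure, not one prescribed by the paper.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: compares the rate of some binary outcome
    (e.g., reaching an engagement milestone) between two course variants.
    Returns the z statistic; |z| > 1.96 is the usual ~5% two-sided cutoff."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: variant A (video-first) vs. variant B (problem-first),
# counting learners who reached the milestone out of those enrolled
z = two_proportion_ztest(480, 2000, 420, 2000)
significant = abs(z) > 1.96
```

With adequate sample sizes per variant, such a test gives the course team a principled basis for adopting one design over the other, rather than relying on impressions from forum posts alone.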
Next Steps and Conclusion

Crucial next steps that dictate the agenda of our continuing work involve outlining granular criteria that capture the various elements of the learning experience in a MOOC designed to leverage intelligence distributed across the dimensions of the MOOC described above. Staying true to the “distributed intelligence” perspective also calls for harnessing the experiences and wisdom of MOOC students, faculty, and course designers in the creation of a robust set of design and evaluation criteria. These criteria would not only serve to evaluate existing MOOCs, but also provide guidelines for the design of future MOOC platform capabilities and supporting technology tools. They could also inform the course evaluation surveys students are expected to fill out after completing a MOOC. Above all, they would help shape a meaningful and timely discourse on MOOC quality.


Journal title:

Volume   Issue

Pages  -

Publication date: 2013